92 research outputs found

    The tipping point: a mathematical model for the profit-driven abandonment of restaurant tipping

    The custom of voluntarily tipping for services rendered has gone in and out of fashion in America since its introduction in the 19th century. Restaurant owners who ban tipping in their establishments often claim that social justice drives their decisions, but we show that rational profit-maximization may also justify them. Here, we propose a conceptual model of restaurant competition for staff and customers, and we show that there exists a critical conventional tip rate at which restaurant owners should eliminate tipping to maximize profit. Because the conventional tip rate has been increasing steadily for the last several decades, our model suggests that restaurant owners may abandon tipping en masse when that critical tip rate is reached. (Comment: 14 pages, 5 figures, supplementary material included)
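    The critical-tip-rate idea can be sketched numerically: compare the owner's profit under tipping (where a rising conventional tip rate inflates the customer's effective price and depresses demand) with profit under a tip-free, service-included price, and find the smallest tip rate at which banning tipping wins. All functional forms and constants below are illustrative assumptions, not the paper's actual model.

```python
# Toy sketch of a critical tip rate. Linear demand and the margin/markup
# constants are assumptions chosen only to make the comparison concrete.

def profit_with_tipping(tip_rate, base_price=20.0, demand=100.0):
    """Owner's profit when customers tip: tips raise the effective price,
    which depresses demand, but the owner keeps a margin on the menu price."""
    effective_price = base_price * (1 + tip_rate)
    customers = demand * max(0.0, 1 - 0.02 * effective_price)
    return customers * base_price * 0.3

def profit_without_tipping(base_price=20.0, demand=100.0, wage_markup=0.15):
    """Owner's profit with tipping banned: prices rise to fund wages, and
    the owner pays the wage markup out of revenue."""
    effective_price = base_price * (1 + wage_markup)
    customers = demand * max(0.0, 1 - 0.02 * effective_price)
    return customers * effective_price * 0.3 - customers * base_price * wage_markup

def critical_tip_rate(step=0.001):
    """Smallest conventional tip rate at which banning tipping is at least
    as profitable as keeping it (grid search over tip rates)."""
    r = 0.0
    while r < 1.0:
        if profit_without_tipping() >= profit_with_tipping(r):
            return r
        r += step
    return None
```

    Under these toy assumptions the tip-free regime overtakes the tipping regime at a finite tip rate, mirroring the paper's qualitative conclusion that a steadily rising conventional tip rate eventually makes abandoning tipping rational.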

    Hybrid statistical and mechanistic mathematical model guides mobile health intervention for chronic pain

    Nearly a quarter of visits to the Emergency Department are for conditions that could have been managed via outpatient treatment; improvements that allow patients to quickly recognize and receive appropriate treatment are crucial. The growing popularity of mobile technology creates new opportunities for real-time adaptive medical intervention, and the simultaneous growth of big data sources allows for preparation of personalized recommendations. Here we focus on the reduction of chronic suffering in the sickle cell disease community. Sickle cell disease is a chronic blood disorder in which pain is the most frequent complication. There is currently no standard algorithm or analytical method for real-time adaptive treatment recommendations for pain. Furthermore, current state-of-the-art methods have difficulty handling continuous-time decision optimization using big data. Facing these challenges, in this study we aim to develop new mathematical tools for incorporating mobile technology into personalized treatment plans for pain. We present a new hybrid model for the dynamics of subjective pain that consists of a dynamical systems approach using differential equations to predict future pain levels, as well as a statistical approach tying system parameters to patient data (both personal characteristics and medication response history). Pilot testing of our approach suggests that it has significant potential to predict pain dynamics given patients' reported pain levels and medication usage. With more abundant data, our hybrid approach should allow physicians to make personalized, data-driven recommendations for treating chronic pain. (Comment: 13 pages, 15 figures, 5 tables)
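    The hybrid structure described above can be sketched as a mechanistic half (a differential equation integrated forward to predict pain) plus a statistical half (a patient-specific parameter estimated from reported pain levels). The linear relaxation model, drug half-life, and all constants below are illustrative assumptions, not the paper's actual equations.

```python
# Mechanistic half: forward-Euler integration of a toy pain ODE,
#   dP/dt = -decay * (P - baseline) - drug_effect * C(t),
# where C(t) is the decaying concentration of doses taken so far.

def simulate_pain(p0, decay, drug_effect, doses, dt=0.1, t_end=24.0):
    """Predict pain level over time from an initial level and dose times (hours)."""
    baseline = 5.0      # assumed untreated steady-state pain level
    half_life = 4.0     # assumed drug half-life in hours
    p, t, levels = p0, 0.0, []
    while t < t_end:
        levels.append((t, p))
        # drug concentration: each past dose decays with the assumed half-life
        conc = sum(2 ** (-(t - td) / half_life) for td in doses if td <= t)
        p = max(0.0, p + (-decay * (p - baseline) - drug_effect * conc) * dt)
        t += dt
    return levels

# Statistical half: tie the decay parameter to a patient's reported data
# by grid-searching for the value that minimizes squared prediction error.

def fit_decay(reported, doses, drug_effect=0.5, candidates=(0.05, 0.1, 0.2, 0.4)):
    """Pick the personalized decay rate that best matches (time, pain) reports."""
    def sse(decay):
        sim = {round(t, 1): p
               for t, p in simulate_pain(reported[0][1], decay, drug_effect, doses)}
        return sum((sim[round(t, 1)] - p) ** 2
                   for t, p in reported if round(t, 1) in sim)
    return min(candidates, key=sse)
```

    In use, the fitted parameter personalizes the model: a patient's sparse self-reports calibrate the ODE, which is then integrated forward to anticipate pain and time medication.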

    Handicap principle implies emergence of dimorphic ornaments

    Species spanning the animal kingdom have evolved extravagant and costly ornaments to attract mating partners. Zahavi's handicap principle offers an elegant explanation for this: ornaments signal individual quality, and must be costly to ensure honest signalling, making mate selection more efficient. Here, we incorporate the assumptions of the handicap principle into a mathematical model and show that they are sufficient to explain the heretofore puzzling observation of bimodally distributed ornament sizes in a variety of species.

    Head and neck cancer predictive risk estimator to determine control and therapeutic outcomes of radiotherapy (HNC-PREDICTOR): development, international multi-institutional validation, and web implementation of clinic-ready model-based risk stratification for head and neck cancer

    Background: Personalised radiotherapy can improve treatment outcomes of patients with head and neck cancer (HNC), where currently a ‘one-dose-fits-all’ approach is the standard. The aim was to establish individualised outcome prediction based on multi-institutional international ‘big data’ to facilitate risk-based stratification of patients with HNC. Methods: The data of 4611 HNC radiotherapy patients from three academic cancer centres were split into four cohorts: a training cohort (n = 2241), an independent test cohort (n = 786), and external validation cohorts 1 (n = 1087) and 2 (n = 497). Tumour- and patient-related clinical variables were considered in a machine learning pipeline to predict overall survival (primary end-point) and local and regional tumour control (secondary end-points); serially, imaging features were considered for optional model improvement. Finally, patients were stratified into high-, intermediate-, and low-risk groups. Results: Performance score, AJCC 8th edition stage, pack-years, and age were identified as predictors for overall survival, demonstrating good performance in both the training cohort (c-index = 0.72 [95% CI, 0.66–0.77]) and in all three validation cohorts (c-indices: 0.76 [0.69–0.83], 0.73 [0.68–0.77], and 0.75 [0.68–0.80]). Excellent stratification of patients with HNC into high, intermediate, and low mortality risk was achieved, with 5-year overall survival rates of 17–46% for the high-risk group compared to 92–98% for the low-risk group. The addition of morphological image features further improved the performance (c-index = 0.73 [0.64–0.81]). These models are integrated in a clinic-ready interactive web interface: https://uic-evl.github.io/hnc-predictor/ Conclusions: Robust model-based prediction was able to stratify patients with HNC into distinct high, intermediate, and low mortality risk groups. This can effectively be capitalised on for personalised radiotherapy, e.g., for tumour radiation dose escalation/de-escalation.

    Intensity standardization methods in magnetic resonance imaging of head and neck cancer

    BACKGROUND AND PURPOSE: Conventional magnetic resonance imaging (MRI) poses challenges for quantitative analysis because voxel intensity values lack physical meaning. While intensity standardization methods exist, their effects on head and neck MRI have not been investigated. We developed a workflow based on healthy-tissue region of interest (ROI) analysis to determine intensity consistency within a patient cohort. Through this workflow, we systematically evaluated intensity standardization methods for MRI of head and neck cancer (HNC) patients. MATERIALS AND METHODS: Two HNC cohorts (30 patients total) were retrospectively analyzed. One cohort was imaged with heterogeneous acquisition parameters (HET cohort), whereas the other was imaged with homogeneous acquisition parameters (HOM cohort). The standard deviation of cohort-level normalized mean intensity (SD NMIc), a metric of intensity consistency, was calculated across ROIs to determine the effect of five intensity standardization methods on T2-weighted images. For each cohort, a Friedman test followed by a post-hoc Bonferroni-corrected Wilcoxon signed-rank test was conducted to compare SD NMIc among methods. RESULTS: Consistency (SD NMIc across ROIs) between unstandardized images was substantially more impaired in the HET cohort (0.29 ± 0.08) than in the HOM cohort (0.15 ± 0.03). Consequently, corrected p-values for intensity standardization methods with lower SD NMIc compared to unstandardized images were significant in the HET cohort (p < 0.05) but not significant in the HOM cohort (p > 0.05). In both cohorts, differences between methods were often minimal and nonsignificant. CONCLUSIONS: Our findings stress the importance of intensity standardization, either through the use of uniform acquisition parameters or specific intensity standardization methods, and the need to test intensity consistency before performing quantitative analysis of HNC MRI.
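    The consistency metric described above (a standard deviation of normalized mean ROI intensities across a cohort) can be sketched as follows. The per-patient reference normalization and the data layout are assumptions for illustration; the study's exact normalization may differ.

```python
# Sketch of a cohort-level intensity-consistency check: for each healthy-
# tissue ROI, normalize its mean intensity by a patient-level reference
# mean, then take the standard deviation of that ratio across patients.
# A lower SD means voxel intensities are more comparable between scans.

import statistics

def normalized_mean_intensity(roi_voxels, reference_voxels):
    """Mean ROI intensity divided by the patient's reference-region mean."""
    return statistics.mean(roi_voxels) / statistics.mean(reference_voxels)

def cohort_sd_nmi(cohort):
    """Per-ROI SD of the normalized mean intensity across the cohort.

    cohort: list of patients, each {"rois": {name: [voxels]},
                                    "reference": [voxels]}.
    """
    rois = cohort[0]["rois"].keys()
    return {
        roi: statistics.stdev(
            normalized_mean_intensity(patient["rois"][roi], patient["reference"])
            for patient in cohort
        )
        for roi in rois
    }
```

    Running this before and after applying a standardization method shows whether the method actually tightened the intensity distributions, which is the comparison the Friedman and Wilcoxon tests above formalize.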

    Evaluation of the methodological quality of studies of the performance of diagnostic tests for bovine tuberculosis using QUADAS

    There has been little assessment of the methodological quality of studies measuring the performance (sensitivity and/or specificity) of diagnostic tests for animal diseases. In a systematic review, 190 studies of tests for bovine tuberculosis (bTB) in cattle (published 1934-2009) were assessed by at least one of 18 reviewers using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies) checklist adapted for animal disease tests. VETQUADAS (VQ) included items measuring clarity in reporting (n = 3), internal validity (n = 9), and external validity (n = 2). A similar pattern of compliance was observed in studies of different diagnostic test types. Compliance significantly improved with year of publication for all items measuring clarity in reporting and external validity, but only for four of the nine items measuring internal validity (p < 0.05). Two reviewers reviewed 107 references, of which 83 had performance data eligible for inclusion in a meta-analysis. In these references, agreement between reviewers' responses was 71% for compliance, 32% for unsure, and 29% for non-compliance. Mean compliance was 2 for reporting items, 5.2 for internal validity, and 1.5 for external validity. The index test result was described in sufficient detail in 80.1% of studies and was interpreted without knowledge of the reference standard test result in only 33.1%. Loss to follow-up was adequately explained in only 31.1% of studies. The prevalence of deficiencies observed may be due to inadequate reporting, but may also reflect a lack of attention to methodological issues that could bias estimates of diagnostic test performance. QUADAS was a useful tool for assessing and comparing the quality of studies measuring the performance of diagnostic tests, but might be improved further by including explicit assessment of the population sampling strategy. The SE3238 project “Meta-analysis of diagnostic tests and modelling to identify appropriate testing strategies to reduce M. bovis infection in GB herds” was funded by the UK Department for Environment, Food and Rural Affairs (Defra).
